Newest journal articles by subject

Subject: Computer Science

Accurate structure prediction of biomolecular interactions with AlphaFold 3
Josh Abramson, Jonas Adler, Jack Dunger, et al.
Nature Published 2024/05/08

Summary:

The new AlphaFold model, with a substantially updated diffusion-based architecture capable of predicting the joint structure of complexes including proteins, nucleic acids, small molecules, ions and modified residues, is described, showing that high-accuracy modelling across biomolecular space is possible within a single unified deep-learning framework.

The Llama 3 Herd of Models
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, et al.
ArXiv Published 2024/07/31

Summary:

It is found that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks, and performs competitively with the state-of-the-art on image, video, and speech recognition tasks.

Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
Patrick Esser, Sumith Kulal, A. Blattmann, Rahim Entezari, Jonas Muller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, Robin Rombach
ArXiv Published 2024/03/05

Summary:

This work improves existing noise sampling techniques for training rectified flow models by biasing them towards perceptually relevant scales and presents a novel transformer-based architecture for text-to-image generation that uses separate weights for the two modalities and enables a bidirectional flow of information between image and text tokens.
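
To make the idea of biasing noise-level sampling toward perceptually relevant scales concrete, here is a minimal sketch that draws rectified-flow timesteps from a logit-normal distribution instead of a uniform one; the logit-normal choice and its parameters are illustrative assumptions, not necessarily the paper's exact schedule.

```python
import numpy as np

def sample_timesteps(n, mean=0.0, std=1.0, rng=None):
    """Draw n rectified-flow timesteps t in (0, 1).

    Uniform sampling weights all noise levels equally; a logit-normal
    density (assumed here for illustration) concentrates samples around
    intermediate noise levels, which are argued to matter most perceptually.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(mean, std, size=n)   # sample in logit space
    return 1.0 / (1.0 + np.exp(-u))     # sigmoid maps back to (0, 1)

t = sample_timesteps(100_000)
print(f"fraction of samples in the mid range [0.3, 0.7]: {((t > 0.3) & (t < 0.7)).mean():.2f}")
```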

Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
Marah Abdin, Sam Ade Jacobs, A. A. Awan, et al.
ArXiv Published 2024/04/22

Mixtral of Experts
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, A. Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, M. Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed
ArXiv Published 2024/01/08

Summary:

This work introduces Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model that vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks, and provides a model fine-tuned to follow instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and the Llama 2 70B chat model on human benchmarks.
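
For readers unfamiliar with sparse mixture-of-experts layers, the toy sketch below illustrates the routing pattern described for Mixtral: a gating network scores all experts for each token, but only the top two are evaluated and their outputs combined. Expert and router sizes here are made-up toy values, and real experts are full feed-forward blocks rather than single matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Toy expert weights and router; real experts are full feed-forward blocks.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x):
    """Route one token vector x through the top-k experts only."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]          # indices of the two best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over the selected experts only
    # Only top_k of the n_experts matrices are touched: the source of sparsity.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_layer(rng.normal(size=d_model))
print(y.shape)  # (16,)
```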

Gemma 2: Improving Open Language Models at a Practical Size
Gemma Team: Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, et al.
ArXiv Published 2024/07/31

Summary:

Gemma 2, a new addition to the Gemma family of lightweight, state-of-the-art open models, ranges in scale from 2 billion to 27 billion parameters; the models deliver the best performance for their size and even offer competitive alternatives to models that are 2-3 times bigger.

GPT-4o System Card
OpenAI: Aaron Hurst, Adam Lerer, Adam P. Goucher, et al.
ArXiv Published 2024/10/25

Summary:

This System Card provides a detailed look at GPT-4o's capabilities, limitations, and safety evaluations across multiple categories, focusing on speech-to-speech while also evaluating text and image capabilities, and describes the measures implemented to ensure the model is safe and aligned.

Multicriteria Optimization and Decision Making: Principles, Algorithms and Case Studies
Michael T. M. Emmerich, A. Deutz
ArXiv Published 2024/06/29

Summary:

The introduction is organized in a unique didactic manner developed by the authors, starting from simpler concepts such as linear programming and single-point methods, and advancing from these to more difficult concepts such as optimality conditions for nonlinear optimization and set-oriented solution algorithms.
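
As a small concrete companion to the set-oriented concepts mentioned above, the sketch below filters a finite set of candidate solutions down to its Pareto (non-dominated) front under minimization of all objectives; it is a generic illustration rather than an algorithm taken from the book.

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a finite set of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(candidates))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```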

The T2K experiment
T. Abe, N. Abgrall, H. Aihara, et al.
Scholarpedia

Gemma: Open Models Based on Gemini Research and Technology
Gemma Team: Thomas Mesnard, Cassidy Hardin, Robert Dadashi, et al.
ArXiv Published 2024/03/13

Summary:

This work introduces Gemma, a family of lightweight, state-of-the-art open models built from the research and technology used to create Gemini models, and presents comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development.

OpenVLA: An Open-Source Vision-Language-Action Model
Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, A. Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag R. Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, Chelsea Finn
ArXiv Published 2024/06/13

Summary:

OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations, is introduced, and it is shown that OpenVLA can be effectively fine-tuned for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities.

StarCoder 2 and The Stack v2: The Next Generation
Anton Lozhkov, Raymond Li, Loubna Ben Allal, et al.
ArXiv Published 2024/02/29

Summary:

The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2, a large model that significantly outperforms other models of comparable size and makes the model weights available under an OpenRAIL license.

OLMo: Accelerating the Science of Language Models
Dirk Groeneveld, Iz Beltagy, Pete Walsh, et al.
Published 2024/02/01

Summary:

OLMo, a competitive and truly open language model, is built to enable the scientific study of language models, in the hope that this release will empower the open research community and inspire a new wave of innovation.

Book review: Christoph Molnar. 2020. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable
R. K. Sinha
Metamorphosis Published 2024/06/01

Heart Disease Prediction Using Machine Learning Algorithms
Dina Jrab, Derar Eleyan, A. Eleyan, Tarek Bejaoui
2024 International Conference on Smart Applications, Communications and Networking (SmartNets) Published 2024/05/28

Summary:

The experimental results show that the ANOVA F-test feature selection algorithm, along with the Support Vector Machine classifier, is a viable approach for developing an advanced intelligent system that can identify heart disease.
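
A minimal sketch of the kind of pipeline described in the summary, pairing ANOVA F-test feature selection with a support vector machine in scikit-learn; the synthetic data, number of selected features and SVM settings are placeholders, not the paper's dataset or tuned configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a tabular heart-disease dataset.
X, y = make_classification(n_samples=500, n_features=13, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=8),  # ANOVA F-test keeps the 8 strongest features
    SVC(kernel="rbf", C=1.0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```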

The EMBL-EBI Job Dispatcher sequence analysis tools framework in 2024
F. Madeira, Nandana Madhusoodanan, Joon Lee, Alberto Eusebi, Ania Niewielska, A. Tivey, Rodrigo Lopez, Sarah Butcher
Nucleic Acids Research Published 2024/04/10

Summary:

Recent improvements to Job Dispatcher are overviewed, including its brand-new website and documentation, enhanced visualisations and improved job management, alongside a rising trend of user reliance on the service from low- and middle-income regions.

TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods
Gary S. Collins, K. Moons, P. Dhiman, et al.
The BMJ Published 2024/04/16

Summary:

The development of TRIPOD+AI is described, and the expanded 27-item checklist, with a more detailed explanation of each reporting recommendation, is presented alongside the TRIPOD+AI for Abstracts checklist.

Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
Luca Soldaini, Rodney Kinney, Akshita Bhagia, et al.
ArXiv Published 2024/01/31

Summary:

To facilitate scientific research on language model pretraining, Dolma, a three-trillion-token English corpus built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials, is curated and released.

GPT-4 passes the bar exam
D. Katz, M. Bommarito, Shang Gao, Pablo Arredondo
Philosophical transactions. Series A, Mathematical, physical, and engineering sciences Published 2024/02/26

Summary:

GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas; these results document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society.

RewardBench: Evaluating Reward Models for Language Modeling
Nathan Lambert, Valentina Pyatkin, Jacob Daniel Morrison, Lester James Validad Miranda, Bill Yuchen Lin, Khyathi Raghavi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, Hanna Hajishirzi
ArXiv Published 2024/03/20

Summary:

The RewardBench dataset is a collection of prompt-chosen-rejected trios spanning chat, reasoning, and safety, built to benchmark how reward models perform on challenging, structured and out-of-distribution queries; the work presents many findings on the propensity for refusals, reasoning limitations, and instruction-following shortcomings of various reward models, towards a better understanding of the RLHF process.
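
The benchmark's basic unit is a prompt paired with a chosen and a rejected completion, and a reward model is judged by how often it scores the chosen response higher. The sketch below shows that accuracy computation, with a deliberately naive placeholder scoring function standing in for a real reward model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trio:
    prompt: str
    chosen: str
    rejected: str

def rewardbench_accuracy(trios: list[Trio], score: Callable[[str, str], float]) -> float:
    """Fraction of trios where the reward model scores chosen above rejected."""
    wins = sum(score(t.prompt, t.chosen) > score(t.prompt, t.rejected) for t in trios)
    return wins / len(trios)

# Placeholder reward model: longer answers score higher (illustration only).
toy_score = lambda prompt, response: float(len(response))

data = [Trio("What is 2+2?", "2+2 equals 4.", "5"),
        Trio("Name a prime.", "7 is prime.", "Nine is a prime number.")]
print(rewardbench_accuracy(data, toy_score))
```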

Iterative enhancement fusion-based cascaded model for detection and localization of multiple disease from CXR-Images
Satvik Vats, Vikrant Sharma, Karan Singh, Devesh Pratap Singh, Mohd Yazid Bajuri, David Taniar, Nisreen Innab, A. Mouldi, A. Ahmadian
Expert Syst. Appl. Published 2024/06/01

Quantum error correction below the surface code threshold
R. Acharya, Laleh Aghababaie-Beni, I. Aleiner, et al.
Nature Published 2024/08/24

Summary:

Two below-threshold surface code memories on Willow, a distance-7 code and a distance-5 code integrated with a real-time decoder, indicate device performance that, if scaled, could realize the operational requirements of large-scale fault-tolerant quantum algorithms.

Navigating the confluence of artificial intelligence and education for sustainable development in the era of industry 4.0: Challenges, opportunities, and ethical dimensions
A. Abulibdeh, Esmat Zaidan, R. Abulibdeh
Journal of Cleaner Production Published 2024/01/01
Artificial intelligence, firm growth, and product innovation
T. Babina, A. Fedyk, A. He, James Hodson
Journal of Financial Economics Published 2024/01/01

Summary:

A new measure of firm-level AI investments is proposed, using a unique combination of worker resume and job postings datasets, which reveals a stark increase in AI investments across sectors.

KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Y. Shao, Kurt Keutzer, A. Gholami
ArXiv Published 2024/01/31

Summary:

This work facilitates low-precision KV cache quantization by incorporating several novel methods, including per-channel key quantization, and develops custom CUDA kernels for KVQuant, which enables serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB GPU and up to 10 million on an 8-GPU system.
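
To make the per-channel key quantization idea concrete, the sketch below quantizes a key matrix to 4 bits using one scale per channel (feature dimension) rather than per token; the shapes, the symmetric scheme and the synthetic outlier channel are simplifying assumptions, not the KVQuant kernels themselves.

```python
import numpy as np

def quantize_keys_per_channel(K, n_bits=4):
    """Symmetric integer quantization of keys K with one scale per channel.

    K has shape (tokens, channels). Key activations tend to have outlier
    channels, so scaling each channel separately preserves them better
    than a single per-token or per-tensor scale.
    """
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(K).max(axis=0, keepdims=True) / qmax   # one scale per channel
    scale[scale == 0] = 1.0
    q = np.clip(np.round(K / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

K = np.random.default_rng(0).normal(size=(128, 64)).astype(np.float32)
K[:, 3] *= 20.0                      # simulate an outlier channel
q, s = quantize_keys_per_channel(K)
err = np.abs(dequantize(q, s) - K).mean()
print(f"mean absolute quantization error: {err:.4f}")
```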

PaliGemma: A versatile 3B VLM for transfer
Lucas Beyer, A. Steiner, André Susano Pinto, et al.
ArXiv Published 2024/07/10

Summary:

PaliGemma is an open vision-language model, based on the SigLIP-So400m vision encoder and the Gemma-2B language model, that achieves strong performance on a wide variety of open-world tasks.

jtools: Analysis and Presentation of Social Scientific Data
Jacob A. Long
J. Open Source Softw. Published 2024/09/06

Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models
Matt Deitke, Christopher Clark, Sangho Lee, et al.
ArXiv

Summary:

Molmo, a new family of VLMs that are state-of-the-art in their class of openness, is presented, together with a novel, highly detailed image-caption dataset collected entirely from human annotators using speech-based descriptions.

Cosmos World Foundation Model Platform for Physical AI
Nvidia: Niket Agarwal, Arslan Ali, Maciej Bala, et al.
ArXiv Published 2025/01/07

Summary:

The Cosmos World Foundation Model Platform is presented to help developers build customized world models for their Physical AI setups; it positions a world foundation model as a general-purpose model that can be fine-tuned into customized world models for downstream applications.

Gemma 3 Technical Report
Gemma Team: Aishwarya Kamath, Johan Ferret, Shreya Pathak, et al.
ArXiv Published 2025/03/25

Summary:

A novel post-training recipe significantly improves the math, chat, instruction-following and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks.

Leadership in Virtual Teams
K. A. Tatarinov, S. M. Muzyka, N. N. Anikienko, I. A. Savchenko
Вестник Алтайской академии экономики и права

Summary:

This paper tries to answer the question: how does ICT affect leadership in virtual teams?

The Galaxy platform for accessible, reproducible, and collaborative data analyses: 2024 update
Linelle Abueg, E. Afgan, Olivier Allart, et al.
Nucleic Acids Research Published 2024/05/20

Summary:

Code development continues in line with the Galaxy Project roadmap, with improvements to job scheduling and the user interface, access to general-purpose graphics processing units (GPGPUs) for cutting-edge methods, and licensed tool support.

Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
A. Ustun, Viraat Aryabumi, Zheng-Xin Yong, Wei-Yin Ko, Daniel D'souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, Sara Hooker
Published 2024/02/12

Summary:

This work introduces Aya, a massively multilingual generative language model that follows instructions in 101 languages, of which over 50% are considered lower-resourced, and introduces extensive new evaluation suites that broaden the state of the art for multilingual evaluation across 99 languages.

DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
DeepSeek-AI: Qihao Zhu, Daya Guo, Zhihong Shao, et al.
ArXiv Published 2024/06/17

Summary:

DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens, which substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance in general language tasks.

BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions
Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, et al.
ArXiv Published 2024/06/22

Summary:

An extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores up to 60%, significantly lower than the human performance of 97%, which underscores the need for further advancements in this area.

Can Generative AI improve social science?
Christopher A Bail
Proceedings of the National Academy of Sciences of the United States of America Published 2024/05/09

Summary:

It is argued that social scientists can address many of these limitations of Generative AI by creating open-source infrastructure for research on human behavior, not only to ensure broad access to high-quality research tools, but also because the progress of AI will require deeper understanding of the social forces that guide human behavior.

UniProt: the Universal Protein Knowledgebase in 2025
Alex Bateman, M. Martin, Sandra Orchard, et al.
Nucleic Acids Research Published 2024/11/18

Structured information extraction from scientific text with large language models
John Dagdelen, Alex Dunn, Sanghoon Lee, Nicholas Walker, Andrew S. Rosen, G. Ceder, Kristin A. Persson, Anubhav Jain
Nature Communications Published 2024/02/15

Summary:

A simple approach to joint named entity recognition and relation extraction is presented, demonstrating how pretrained large language models can be fine-tuned to extract useful records of complex scientific knowledge.
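
In a fine-tuned extraction setup of this kind, the model emits a structured record (entities plus relations) for each passage. The sketch below shows only the parsing and validation side of such a pipeline, with a hypothetical JSON schema and example completion that are not the paper's exact format.

```python
import json

# Hypothetical completion an extraction-tuned LLM might return for a passage.
completion = """
{
  "entities": [
    {"name": "LiFePO4", "type": "material"},
    {"name": "olivine", "type": "structure"}
  ],
  "relations": [
    {"head": "LiFePO4", "relation": "has_structure", "tail": "olivine"}
  ]
}
"""

def parse_record(text):
    """Validate an extraction record: every relation must point at known entities."""
    record = json.loads(text)
    names = {e["name"] for e in record["entities"]}
    for r in record["relations"]:
        if r["head"] not in names or r["tail"] not in names:
            raise ValueError(f"relation references unknown entity: {r}")
    return record

print(parse_record(completion)["relations"][0])
```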

BLINK: Multimodal Large Language Models Can See but Not Perceive
Xingyu Fu, Yushi Hu, Bangzheng Li, Yu Feng, Haoyu Wang, Xudong Lin, Dan Roth, Noah A. Smith, Wei-Chiu Ma, Ranjay Krishna
ArXiv Published 2024/04/18

Summary:

Blink, a new benchmark for multimodal large language models (LLMs) that focuses on core visual perception abilities not found in other evaluations, is introduced, and is expected to stimulate the community to help multimodal LLMs catch up with human-level visual perception.

A foundation model for clinical-grade computational pathology and rare cancers detection
Eugene Vorontsov, A. Bozkurt, Adam Casson, et al.
Nature Medicine Published 2024/07/22

Summary:

Virchow, the largest foundation model for computational pathology to date, is presented, and it is demonstrated that a large foundation model enables pan-cancer detection, achieving performance similar to tissue-specific clinical-grade models in production and outperforming them on some rare variants of cancer.

Testing theory of mind in large language models and humans
James W. A. Strachan, Dalila Albergo, Giulia Borghini, Oriana Pansardi, E. Scaliti, Saurabh Gupta, Krati Saxena, Alessandro Rufo, Stefano Panzeri, Guido Manzi, Michael S. A. Graziano, Cristina Becchio
Nature Human Behaviour Published 2024/05/20

Summary:

It is demonstrated that large language models exhibit behaviour consistent with the outputs of mentalistic inference in humans; the work also highlights the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.

Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning
Shivalika Singh, Freddie Vargus, Daniel Dsouza, et al.
Published 2024/02/09

Summary:

The primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages, together with the most extensive multilingual collection to date, comprising 513 million instances obtained by templating and translating existing datasets across 114 languages.
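
The "templating" mentioned above refers to converting existing labelled datasets into instruction-response pairs by filling natural-language templates. The sketch below shows that transformation on a made-up sentiment record and template, both of which are illustrative only.

```python
# Hypothetical source record from an existing labelled dataset.
record = {"text": "The film was a joy from start to finish.", "label": "positive"}

# One of possibly many templates turning the record into an instruction pair.
template = {
    "instruction": "Classify the sentiment of the following review as positive or negative:\n{text}",
    "response": "The sentiment of this review is {label}.",
}

def apply_template(rec, tpl):
    """Render an instruction-following example from a classification record."""
    return {
        "inputs": tpl["instruction"].format(**rec),
        "targets": tpl["response"].format(**rec),
    }

example = apply_template(record, template)
print(example["inputs"])
print(example["targets"])
```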

A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions
Bharti Khemani, S. Patil, K. Kotecha, Sudeep Tanwar
Journal of Big Data Published 2024/01/16

Summary:

The paper delves into specific GNN models like graph convolution networks (GCNs), GraphSAGE, and graph attention networks (GATs), which are widely used in various applications today and offers an extensive overview of the landscape of GNN research and its practical implementations.
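
For readers new to the models named above, the sketch below implements a single graph convolution (GCN) layer update, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), in plain NumPy; the graph, feature dimensions and weights are random placeholders.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: normalized neighborhood averaging, then a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])                                # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]    # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)                        # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
H = rng.normal(size=(3, 4))                                    # node features
W = rng.normal(size=(4, 2))                                    # layer weights
print(gcn_layer(A, H, W).shape)   # (3, 2)
```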

Large language models (LLMs) as agents for augmented democracy
Jairo F. Gudiño, Umberto Grandi, César A. Hidalgo
Philosophical transactions. Series A, Mathematical, physical, and engineering sciences Published 2024/05/06

Hardware implementation of memristor-based artificial neural networks
F. Aguirre, A. Sebastian, M. Le Gallo, et al.
Nature Communications Published 2024/03/04

Summary:

This work reviews the latest efforts towards hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each block and the different design alternatives with their advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics.
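
The core operation these memristive blocks accelerate is an analogue vector-matrix multiply: voltages applied to crossbar rows produce column currents I = Gᵀv via Ohm's and Kirchhoff's laws. The sketch below is an idealized numerical model of that mapping, with made-up conductance values and no device non-idealities.

```python
import numpy as np

def crossbar_vmm(G, v):
    """Idealized memristor crossbar: column current = sum over rows of G[i, j] * v[i].

    G is the conductance matrix (rows = word lines, columns = bit lines) and
    v is the vector of applied row voltages. Device noise, wire resistance
    and read disturb are ignored in this illustration.
    """
    return G.T @ v   # Kirchhoff's current law sums per-cell Ohm's-law currents per column

G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 0.5e-6]])     # conductances in siemens (placeholder values)
v = np.array([0.2, 0.1])             # read voltages in volts
print(crossbar_vmm(G, v))            # column currents in amperes
```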

CodeGemma: Open Code Models Based on Gemma
CodeGemma Team: Heri Zhao, Jeffrey Hui, Joshua Howland, et al.
ArXiv Published 2024/06/17

SwissDock 2024: major enhancements for small-molecule docking with Attracting Cavities and AutoDock Vina
Marine Bugnon, U. Röhrig, Mathilde Goullieux, Marta A. S. Perez, Antoine Daina, O. Michielin, Vincent Zoete
Nucleic Acids Research Published 2024/04/30

Summary:

The latest version of SwissDock is presented, in which EADock DSS has been replaced by two state-of-the-art docking programs, Attracting Cavities and AutoDock Vina, and user-friendly command-line access has been developed that enables covalent ligand docking with Attracting Cavities.

Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation
Axel Sauer, Frederic Boesel, Tim Dockhorn, A. Blattmann, Patrick Esser, Robin Rombach
SIGGRAPH Asia 2024 Conference Papers Published 2024/03/18

Summary:

This work introduces Latent Adversarial Diffusion Distillation (LADD), a novel distillation approach that overcomes the limitations of ADD by utilizing generative features from pretrained latent diffusion models, enabling high-resolution, multi-aspect-ratio image synthesis.
